Development, implementation, and dissemination of operational innovations across the Trial Innovation Network
- Marisha E. Palm, Terri L. Edwards, Cortney Wieber, Marie T. Kay, Eve Marion, Leslie Boone, Angeline Nanni, Michelle Jones, Eilene Pham, Meghan Hildreth, Karen Lane, Nichol McBee, Daniel K. Benjamin, Jr, Gordon R. Bernard, J. Michael Dean, Jamie P. Dwyer, Daniel E. Ford, Daniel F. Hanley, Paul A. Harris, Consuelo H. Wilkins, Harry P. Selker
-
- Journal:
- Journal of Clinical and Translational Science / Volume 7 / Issue 1 / 2023
- Published online by Cambridge University Press:
- 20 October 2023, e251
-
- Article
-
- Open access
-
Improving the quality and conduct of multi-center clinical trials is essential to the generation of generalizable knowledge about the safety and efficacy of healthcare treatments. Despite significant effort and expense, many clinical trials are unsuccessful. The National Center for Advancing Translational Sciences launched the Trial Innovation Network to address critical roadblocks in multi-center trials by leveraging existing infrastructure and developing operational innovations. We provide an overview of the roadblocks that led to opportunities for operational innovation, our work to develop, define, and map innovations across the network, and how we implemented and disseminated mature innovations.
Probing the cold magnetised Universe with SPICA-POL (B-BOP)
- Part of
- Ph. André, A. Hughes, V. Guillet, F. Boulanger, A. Bracco, E. Ntormousi, D. Arzoumanian, A.J. Maury, J.-Ph. Bernard, S. Bontemps, I. Ristorcelli, J.M. Girart, F. Motte, K. Tassis, E. Pantin, T. Montmerle, D. Johnstone, S. Gabici, A. Efstathiou, S. Basu, M. Béthermin, H. Beuther, J. Braine, J. Di Francesco, E. Falgarone, K. Ferrière, A. Fletcher, M. Galametz, M. Giard, P. Hennebelle, A. Jones, A. A. Kepley, J. Kwon, G. Lagache, P. Lesaffre, F. Levrier, D. Li, Z.-Y. Li, S. A. Mao, T. Nakagawa, T. Onaka, R. Paladino, N. Peretto, A. Poglitsch, V. Revéret, L. Rodriguez, M. Sauvage, J. D. Soler, L. Spinoglio, F. Tabatabaei, A. Tritsis, F. van der Tak, D. Ward-Thompson, H. Wiesemeyer, N. Ysard, H. Zhang
-
- Journal:
- Publications of the Astronomical Society of Australia / Volume 36 / 2019
- Published online by Cambridge University Press:
- 02 August 2019, e029
-
- Article
-
- Open access
-
Space Infrared Telescope for Cosmology and Astrophysics (SPICA), the cryogenic infrared space telescope recently pre-selected for a ‘Phase A’ concept study as one of the three remaining candidates for the European Space Agency's (ESA's) fifth medium class (M5) mission, is foreseen to include a far-infrared polarimetric imager [SPICA-POL, now called B-fields with BOlometers and Polarizers (B-BOP)], which would offer a unique opportunity to resolve major issues in our understanding of the nearby, cold magnetised Universe. This paper presents an overview of the main science drivers for B-BOP, including high dynamic range polarimetric imaging of the cold interstellar medium (ISM) in both our Milky Way and nearby galaxies. Thanks to a cooled telescope, B-BOP will deliver wide-field 100–350 μm images of linearly polarised dust emission in Stokes Q and U with a resolution, signal-to-noise ratio, and both intensity and spatial dynamic ranges comparable to those achieved by Herschel images of the cold ISM in total intensity (Stokes I). The B-BOP 200 μm images will also have a factor ~30 higher resolution than Planck polarisation data. This will make B-BOP a unique tool for characterising the statistical properties of the magnetised ISM and probing the role of magnetic fields in the formation and evolution of the interstellar web of dusty molecular filaments giving birth to most stars in our Galaxy. B-BOP will also be a powerful instrument for studying the magnetism of nearby galaxies and testing Galactic dynamo models, constraining the physics of dust grain alignment, informing the problem of the interaction of cosmic rays with molecular clouds, tracing magnetic fields in the inner layers of protoplanetary disks, and monitoring accretion bursts in embedded protostars.
Probing the Baryon Cycle of Galaxies with SPICA Mid- and Far-Infrared Observations
- Part of
- F. F. S. van der Tak, S. C. Madden, P. Roelfsema, L. Armus, M. Baes, J. Bernard-Salas, A. Bolatto, S. Bontemps, C. Bot, C. M. Bradford, J. Braine, L. Ciesla, D. Clements, D. Cormier, J. A. Fernández-Ontiveros, F. Galliano, M. Giard, H. Gomez, E. González-Alfonso, F. Herpin, D. Johnstone, A. Jones, H. Kaneda, F. Kemper, V. Lebouteiller, I. De Looze, M. Matsuura, T. Nakagawa, T. Onaka, P. Pérez-González, R. Shipman, L. Spinoglio
-
- Journal:
- Publications of the Astronomical Society of Australia / Volume 35 / 2018
- Published online by Cambridge University Press:
- 18 January 2018, e002
-
- Article
-
-
The SPICA mid- and far-infrared telescope will address fundamental issues in our understanding of star formation and ISM physics in galaxies. A particular hallmark of SPICA is the outstanding sensitivity enabled by the cold telescope, optimised detectors, and wide instantaneous bandwidth throughout the mid- and far-infrared. The spectroscopic, imaging, and polarimetric observations that SPICA will be able to collect will help in clarifying the complex physical mechanisms which underlie the baryon cycle of galaxies. In particular, (i) the access to a large suite of atomic and ionic fine-structure lines for large samples of galaxies will shed light on the origin of the observed spread in star-formation rates within and between galaxies, (ii) observations of HD rotational lines (out to ~10 Mpc) and fine structure lines such as [C ii] 158 μm (out to ~100 Mpc) will clarify the main reservoirs of interstellar matter in galaxies, including phases where CO does not emit, (iii) far-infrared spectroscopy of dust and ice features will address uncertainties in the mass and composition of dust in galaxies, and the contributions of supernovae to the interstellar dust budget will be quantified by photometry and monitoring of supernova remnants in nearby galaxies, (iv) observations of far-infrared cooling lines such as [O i] 63 μm from star-forming molecular clouds in our Galaxy will evaluate the importance of shocks to dissipate turbulent energy. The paper concludes with requirements for the telescope and instruments, and recommendations for the observing strategy.
26 - CMB Data Processing
- from Part V - Precision Tools for Precision Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 606-633
-
- Chapter
-
Summary
The CMB data comes from many different kinds of experiment, and in the competition between the small balloon-based missions and huge satellite-borne programs it is occasionally the former who win. But in the end, the high quality sky-wide data comes from space experiments. During this century, two have dominated cosmology so far: WMAP and Planck.
In this chapter we describe how the data is acquired, how it is stored in special format maps, and how it is treated to remove all the non-cosmological contributions. Once this has been done we have our data; it is the removal of noise and foreground contamination that is the most challenging step, and the most demanding of computing resources.
We treat the problem of noise and foreground removal in as generic a way as possible; there is no single best way of doing this. The mathematical formalism is quite complex, as is the statistical data analysis that will follow.
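One generic approach to the foreground-removal problem described above is the internal linear combination of multi-frequency maps; the chapter does not commit to a specific method, so the following is only a toy sketch under assumed, illustrative signal and foreground amplitudes.

```python
import numpy as np

# Toy internal-linear-combination (ILC) foreground cleaning.
# Each frequency channel sees the same CMB signal (in thermodynamic units),
# plus a foreground template scaled by a channel-dependent amplitude, plus noise.
# All amplitudes below are illustrative, not taken from the text.
rng = np.random.default_rng(1)
npix = 10_000
cmb = rng.normal(0, 100, npix)                  # common to every channel
fg_template = rng.normal(0, 50, npix)
fg_amp = np.array([3.0, 1.0, 0.3])              # assumed frequency scaling
maps = cmb + fg_amp[:, None] * fg_template + rng.normal(0, 5, (3, npix))

# ILC weights: minimise the variance of w @ maps subject to sum(w) = 1,
# which preserves the CMB (identical in all channels) while suppressing
# foregrounds and noise.  Solution: w ∝ C^{-1} 1, normalised.
C = np.cov(maps)
w = np.linalg.solve(C, np.ones(3))
w /= w.sum()
cleaned = w @ maps
# The cleaned map tracks the input CMB far better than any raw channel does.
```

With three channels and one foreground component, the constraint w·1 = 1 leaves enough freedom to nearly null the foreground mixing vector; real pipelines face many components and spatially varying spectra, which is why the chapter treats the problem generically.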
Introduction
Some of the history of the observations of the Cosmic Microwave Background radiation (CMB) has been recounted in the first part of this book (see Chapter 3). The culmination of the early efforts, the ‘first 30 years’, was perhaps the COBE DMR/FIRAS mission. COBE DMR produced the first all-sky maps of the CMB (Smoot et al., 1991), while the FIRAS experiment on the same satellite had established the remarkable accuracy of the Planckian form of the radiation spectrum (Mather and the COBE collaboration, 1990). Importantly, COBE had delivered a low resolution, albeit noisy, picture of the microwave sky that not only made substantial scientific advances, but also attracted widespread public attention and provided a vital scientific stimulus.
The manifest success of the COBE mission inspired a series of ground-based, balloon-based and space observatories to look at the fluctuations with more sensitivity and at higher angular resolution. Satellite experiments are expensive and take a long time to build, and so the competition to get the key results from lower cost ground-based experiments before the space experiments could deliver was fierce, and in large part successful.
24 - Frequentist Hypothesis Testing
- from Part V - Precision Tools for Precision Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 567-581
-
- Chapter
-
Summary
In this chapter we cover one particular aspect of frequentist statistics in order to be able to compare and contrast the approach with the Bayesian approach that we shall be discussing shortly. In much of the world that uses statistics, the principal objective appears to be to test hypotheses about given data and come to some conclusion. The conclusion might be the answer to either of the questions: ‘Is this supported by the data or not?’, or ‘Which is the better descriptor of the data?’. Control samples are commonly used as an alternative against which the data will be evaluated. Whatever the question, the questioner expects a quantitative answer.
In cosmology we only have the one Universe as a source of data, and we have no other Universe that can act as a control. Of course we do have numerical simulations that can play that role. Our goal is often to fit a parameterised model to data and we can reasonably ask how confident we should be that this is the ‘best’ answer.
Classical Inference
In generic terms, the goal of statistical analysis is to discover how some factor Y responds to changes in some input, or set of inputs, X. The observations (x, y) are paired, and while the values of x are generally under the control of the experimentalist, the response y is subject to measurement errors ∊. The mechanism for doing this may take the form of establishing a model for the dependence of Y on X, or might be to discover which of the factors X, or which combination of the elements of X, are the main contributory factors to the response Y. Here we will consider only the first of these, fitting parameterised models.
Linear Regression
The simplest data model, linear regression, is commonplace and involves determining the parameters α and β in a relationship of the form
Y = α + βX + ∊, (24.1)
where the experiment consists of observing the values of Y, the response, resulting from treatments X. The observations Y are subject to errors ∊.
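Fitting the parameters of (24.1) by ordinary least squares can be sketched in a few lines; the simulated data and the true parameter values below are illustrative, not from the text.

```python
import numpy as np

# Simulated experiment: the true relationship is y = alpha + beta*x + noise.
rng = np.random.default_rng(0)
alpha_true, beta_true = 2.0, 0.5          # assumed values for the demonstration
x = np.linspace(0, 10, 50)                # treatments, chosen by the experimenter
y = alpha_true + beta_true * x + rng.normal(0, 0.1, size=x.size)

# Ordinary least squares via the design matrix [1, x]: minimises the sum of
# squared residuals (y - alpha - beta*x)^2 over (alpha, beta).
A = np.column_stack([np.ones_like(x), x])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(alpha_hat, beta_hat)                # estimates close to alpha_true, beta_true
```

The same least-squares machinery generalises directly to the multi-factor case mentioned above by adding further columns to the design matrix.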
13 - Space-Time Geometry and Calculus
- from Part III - Relativistic Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 307-339
-
- Chapter
-
Summary
Unlike Newton's theory of gravitation, Einstein's theory of general relativity views the gravitational force as a manifestation of the geometry of the underlying space-time. The idea sounds good: it evokes images of billiard balls rolling around on tables with hills and valleys where the balls’ otherwise rectilinear motion is disturbed by the geometry of their environment. The difficulty is how to achieve the parallel goal of expressing the force of gravitation geometrically, and to do so without destroying all that we have learned about physics in our local environment.
As we saw in the previous chapter, Einstein saw the principles of covariance and equivalence as a way of formalising that. The theory of gravitation should always admit local inertial frames in which our known laws of physics would hold. Moreover, the mathematical expression of the laws of physics would be the same in all inertial frames. A key step at this point was to argue that physical entities are described by mathematical objects that transform correctly under local Lorentz transformations. This brings us to Minkowski's use of 4-vectors and tensors as the mathematical embodiment of physical quantities.
However, these are local statements, not global ones. They do not tell us how the geometry would affect two widely separated inertial observers in the presence of a gravitational field. The clue was given to Einstein by Marcel Grossmann who suggested that this link would be provided by insisting that the underlying geometry was the geometry of a Riemannian space. This provides the structure to address global issues and to connect different parts of the space.
In this chapter we describe this process and provide the mathematical structure that arises when we follow up on Grossmann's plan. We learn about connecting parts of the space-time and we establish notions of derivatives, geodesics and measures of the curvature.
A Geometric Perspective
The space-time of Einstein's theory is specified by its geometry. A gravitational field is thought of as distorting the space-time away from the no-gravity space-time of Minkowski. So, just as the Minkowski space of special relativity is entirely specified by a metric tensor or line element telling us what the space-time separation of neighbouring points is, the space-time of the general theory is specified by a more general metric.
4 - Recent Cosmology
- from Part I - 100 Years of Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 77-94
-
- Chapter
-
Summary
It could be said that the discovery and subsequent exploration of the cosmic microwave background radiation marked a transition from cosmology as a branch of astronomy that was largely a philosophical endeavour to cosmology as an astrophysical discipline that is now a fully fledged branch of physics. The CMB established a secure physical framework within which we could undertake sophisticated experiments that would define the parameters of that framework with ever greater precision. As understanding has grown, more and more of traditional astronomy has been embraced to provide evidence in support of this new paradigm: we have garnered evidence from the study of stars, such as supernovae, of galaxies and their velocity fields, and from observations of galaxy clusters and clustering. These studies have involved ground-based and space-based experiments at all wavelengths from radio to gamma-rays.
‘Precision Cosmology’ was born and we now have the recognised disciplines of ‘astro-particle physics’, ‘astro-statistics’ and ‘numerical cosmology’, to name but a few.
Fifty years on from the discovery of the CMB a number of issues have been clarified, but many more remain. Among the numerous areas of active research in cosmology today, there are three which have particular bearing on the first half million years: dark matter, dark energy and gravitational waves.
In the Aftermath of the CMB
The discovery of the CMB not only served to establish our cosmological paradigm as the Hot Big Bang theory, it also stimulated a growth in cosmology as a branch of physics. Fifty years later cosmology has reached a depth and precision of understanding that would have been inconceivable prior to 1965. The 50 years from 1965 to 2015 saw remarkable advances on both the theoretical and the data-acquisition and analysis fronts, which are described briefly in this chapter. One important consequence is the blurring of the boundary between theory and observation. Theoretical advances have driven experiments to gather and analyse data through which the theory could be exploited, while ever more sophisticated experiments demand improved methods of data analysis and impose constraints on the values of the parameters of the theories.
It has previously been said that the essence of physics, and science in general, is that it should be possible to perform experiments and to have other groups repeat those experiments, thus providing verification of the results and the consequent conclusions.
25 - Statistical Inference: Bayesian
- from Part V - Precision Tools for Precision Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 582-605
-
- Chapter
5 - Newtonian Cosmology
- from Part II - Newtonian Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 97-129
-
- Chapter
-
Summary
We will use Newtonian versions of solutions to the Einstein field equations to describe a series of model universes that provide a framework within which we can better understand how our Universe works. Newton's theory of gravitation has been replaced by Einstein's, but in many respects Newton's theory is a pretty good approximation: good enough that we depend on it in our everyday lives. The Newtonian view is certainly easier for us to relate to and exploit, but we must understand the inherent limitations.
Here, we highlight the fundamental differences between the Newtonian and Einsteinian theories: Newton with his absolute space and universal time, and Einstein with his geometrisation of gravity. Fortunately, there are some relevant solutions of the Einstein equations that have direct Newtonian analogues. Those Newtonian analogues are lacking some important features, notably a lack of a description of how light propagates. Fortunately, we can graft the information from some of the Einstein models onto the Newtonian models to produce what we might call Newtonian surrogates of the Einsteinian models.
In this chapter we introduce the simplest of a series of homogeneous and isotropic cosmological models formulated within the limited framework of Newtonian gravity. These models contain only ‘dust’: pressure-free matter made up of particles that are neither created nor destroyed, and that do not interact with one another. The Universe evolves under the mutual gravitational interaction of those particles.
While this model is not realistic it nevertheless allows us to develop a full cosmological model that is the template for the more complex models that follow. We develop these models in considerable detail since much of what is done will be repeated for the other, more realistic, models.
It is worth remarking that these models are fundamental to numerical N-body simulations of the Universe in which the constituent particles are ‘dust’. Such dust models can also be used to study the growth of the structure in the Universe.
Why Bother with Newton?
Two Views of Gravity
The expansion of the Universe is dominated by the gravitational force. The best theory we have for the gravitational force is Einstein's Theory of General Relativity (Einstein, 1916a), which relates geometry with the material content of the space-time containing that matter.
Contents
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp vii-viii
-
- Chapter
Frontmatter
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp i-iv
-
- Chapter
20 - Recombination of the Primeval Plasma
- from Part IV - The Physics of Matter and Radiation
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 476-490
-
- Chapter
-
Summary
The transition the Universe makes from a fully ionised plasma to a neutral gas happens very quickly when the age of the Universe is around 380 000 years, i.e. a redshift of z ~ 1500. From our point of view, looking out into the distant Universe, and hence into the past, we see this as a cosmic photosphere. This is referred to as the surface of last scattering since this corresponds to the time after which most of the CMB photons travel freely towards us. The event of the Universe becoming neutral is referred to as the epoch of recombination.
The image of this surface we create with our radiometers is a snapshot of the Universe as it was when it was still only slightly inhomogeneous: we see the initial conditions for the birth of cosmic structure.
Recombination
The photons of the CMB that we observe come from the epoch at a redshift z ~ 1090 when the cosmic plasma was almost neutral and the timescale for electron–photon collisions became longer than the cosmic expansion timescale. The Universe was then some ~ 380 000 years old. Prior to that time, electrons and photons had been held in thermodynamic equilibrium by the Thomson scattering process, while after that time the baryonic component of the cosmic plasma could evolve as an independent component of the plasma.
This was the time of the decoupling of matter from the radiation field. Shortly after that time most of the CMB photons were able to travel directly to us now without being scattered by free electrons. This gave us a direct view of the early conditions that led to the formation of the cosmic structures we see today, the galaxies, clusters of galaxies and their organisation into what is now known as the ‘cosmic web’.
The Universe did not suddenly become neutral as it emerged from the fully ionised fire-ball phase of the cosmic expansion. The neutralisation, or ‘recombination’, of the cosmic plasma took place over several tens of thousands of years. That is enough to slightly blur the details of the structure that might be observed on the last scattering surface: we are looking into the cosmic photosphere.
7 - The Early Universe
- from Part II - Newtonian Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 154-186
-
- Chapter
-
Summary
The study of the very early Universe has given birth to a discipline that is broadly referred to as astro-particle physics, a subject that sits on the border of high energy particle physics, nuclear physics and astrophysics. Astro-particle physics has proven to be a wonderful symbiosis of experimental physics and experimental astrophysics in which the early Universe is, in effect, a high energy physics laboratory. Arguably, it was Gamow and his collaborators, in the immediate post World War II decade, who took the first tentative steps in this direction by arguing that the early Universe was the site of synthesis of the chemical elements.
The physics domain of cosmic nucleosynthesis is the first few minutes of the Big Bang. Hayashi (1950) took the first step further back into the past and towards the Big Bang itself with his exploitation of weak interactions to describe what had happened just prior to the period of nucleosynthesis. Since then, particle experiments have pushed back our understanding of the physics of matter to the point where we can now discuss the period before the first micro-seconds of the cosmic origin within the context of known high energy physics.
Early Thermal History
Astro-particle physics inevitably brings in an even wider domain of physics than the classical cosmology of general relativistic models. Some fine texts have been written on this subject from a variety of points of view and at different levels. The approach used here is to explain the physics of the early Universe with a view to understanding what precision cosmology has to say about the particle and nuclear physics aspects of the earliest moments after the Big Bang.
The first steps in this direction were taken by Gamow and his co-workers during the first decade after the Second World War. Gamow had realised that the wartime research into nuclear physics could answer some important questions about his concept of the early Universe that he had started working on prior to 1939. We can trace back Gamow's interest in a Big Bang Universe to Gamow and Teller (1939a,b) in which he had described a simple idea for the formation of galaxies, and no doubt his interest extended back to his brief association with Friedmann.
1 - Emerging Cosmology
- from Part I - 100 Years of Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 3-18
-
- Chapter
-
Summary
The Universe is as it is because it was as it was
Herman Bondi,
Lectures at King's College London, 1965
Introduction
Almost every civilisation throughout history has had a cosmology of some kind. By this we mean a description of the Universe in which they live based on their state of knowledge. The Vikings, for example, had a complex cosmology in which the world and its inhabitants were controlled by a set of Gods, both good and bad. Nature was ruled not by the laws of physics, but by the forces of nature controlled by the whims of these Gods. However bizarre that may seem to us now, at the time this belief-set dominated human behaviour: its social mores and values.
Today we live in a Universe that is described by physical laws. What is remarkable is that these laws have more often than not been discovered on the basis of laboratory experiments, and subsequently found to work on the vastest scales imaginable. That fact leads us to believe that our explanations of the Universe are a valid description of what is actually happening. We do not need to invoke special laws just to explain the cosmos and our position within it.
The physical laws governing the Universe and its constituents were discovered over a period of several centuries. Some might say this path to realisation started with Copernicus putting the Sun at the centre of everything rather than the Earth. Others might argue that this was merely descriptive and that knowledge of the laws started to emerge following on from the work of Kepler, Galileo and Newton. However one sees it, by the beginning of the 20th century, with Einstein's Theory of Relativity, the scene was set to embark on a journey of observational cosmology which 100 years later would lead to most scientists agreeing that we have a self-consistent theory of the Universe based on known laws of physics. Some, no doubt, would go as far as to say that the current view was incontrovertible.
Just as the early map makers measured and marked out our planet, the cosmographers of the 20th century marked out and mapped the Universe.
2 - The Cosmic Expansion
- from Part I - 100 Years of Cosmology
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 19-47
-
- Chapter
-
Summary
The inter-war years, 1918–1939, were a period of coming to terms (a) with Einstein's General Relativity and (b) with Hubble's discovery of the redshift–distance relationship. By the end of the period our cosmological framework was understood well enough in terms of an expanding homogeneous and isotropic solution of the Einstein equations, and it probably seemed a matter of acquiring redshifts in order to settle the parameters of the model. Two parameters would do the job.
It could not have been imagined that by 1955 there would be a heated argument between two camps: Gamow, who said there was a Hot Big Bang, and Hoyle, Bondi and Gold, who said there was not. Added to that was another heated, even acrimonious, argument about the interpretation of the counts of recently discovered radio sources made by Ryle in Cambridge, England and Mills in Sydney, Australia. Ten years after that we had the quasars and the Cosmic Microwave Background Radiation (CMB) which, at the time, not everyone believed was cosmic in origin.
This chapter relates some of that story. It is an essential part of explaining how we came to be where we are.
Models of the Cosmic Expansion
Several spatially homogeneous solutions of the equations of the General Theory of Relativity were discovered within the first decade following their publication (Einstein, 1916a). At that time relatively little was known about the Universe: it was still uncertain whether or not the nebulae were merely parts of our own Galaxy, although, through the pioneering work of Slipher, it was known that most of them were rushing away from us. Little or nothing was known about the homogeneity or isotropy of the Universe, but the assumption of homogeneity and isotropy would simplify the largely intractable Einstein equations. During the decade following the publication of the Einstein equations several important cosmological solutions were discovered. These are discussed next.
It is interesting in this context to read and compare the texts of Eddington (1923), published before Edwin Hubble's study of the nebulae and his consequent discovery of the cosmic expansion, and the text of Tolman (1934b) which was published shortly thereafter. The search for understanding the Universe in terms of models is beautifully described in Michael Heller's Ultimate Explanations of the Universe (Heller, 2010).
Appendix G - Acknowledgements
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 700-701
-
- Chapter
-
Summary
Use of Figures
The publications for which a licence to re-use the material had to be obtained from the publisher, or their agent, are as follows:
• Figure 1.6 reproduced from Gamow (1956, Fig. 1. p.1731) with permission (©Elsevier 1956).
• Figure 3.2 reproduced from Bortolot et al. (1969, Fig. 1 p.308) with permission, ©1969 American Physical Society.
• Figure 3.7 (right) reproduced from Sunyaev and Zel'dovich (1970a, Fig. 1b p.9) with permission from the publishers via CCC 37001813366238.
• Figure 3.11 (left) reproduced from Gawiser and Silk (2000, Fig. 4 p.17) with permission from the publishers via CCC 3700450644616.
• Figure 4.3 adapted by permission from Macmillan Publishers Ltd: Massey et al. (2007, Nature) ©2007. CCC licence 3730161227530, article æ6.
• Figure 27.3 reproduced from Sherwin et al. (2011, Figure 3) with permission from the publishers via CCC 3721150136987. ©2011 by the American Physical Society.
In all cases the full title of the article and other bibliographic data are available from the corresponding entry in the References section. ‘CCC’ refers to permissions gained through the Copyright Clearance Centre and the number following is the granted licence number. These cover situations where the authors’ permission was required but was not available.
NASA copyright policy states that ‘NASA material is not protected by copyright unless noted’. Thus Figure 3.1 is in the public domain.
Unless otherwise noted, images and video on Laser Interferometer Gravitational-wave Observatory (LIGO) public web sites (public sites ending with a ligo.caltech.edu or ligo.mit.edu address) may be used for any purpose without prior permission. See Figure 4.6.
Figure 3.10: The Particle Data Group publishes its annual Review of Particle Physics (RPP) in the journal Chinese Physics C. Since 2014, the figures from the RPP have been in the public domain (Olive and Particle Data Group, 2014) and author permission is automatically granted. This also covers Figure 27.1 of Scott and Smoot (2014).
Notation and Conventions
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp xiii-xiv
-
- Chapter
Appendix B - Magnitudes and Distances
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 679-681
-
- Chapter
-
Summary
The distance to an astronomical source is a fundamental datum about the source. While astronomers measure the sizes of nearby objects in centimetres, metres or kilometres, distances to astronomical objects are measured in light years or parsecs, as are the sizes of large structures such as galaxies and clusters of galaxies.
The light year is simply the distance travelled by light in one year and it is a fairly convenient unit of measure for the distances to stars. However, distances to stars are not measured using the travel time of light rays: nearby stellar distances are measured using parallax. The parallax is half the maximum apparent angular shift in the position of a star on the sky as the Earth moves from one side of its orbit to the other. Equivalently, it is the angle subtended by the radius of the Earth's orbit (one astronomical unit) as viewed from the star.
The parsec is the distance at which the parallax of an object would be 1 arc second, so the distance in parsecs is simply the reciprocal of the parallax in arcseconds. The parallax of the nearest star, Proxima Centauri, 4.2 light years away, is 0.772 arcsec, corresponding to a distance of 1/0.772 ≈ 1.3 parsecs. The relationship between light years and parsecs is 1 parsec = 3.26 light years. Distances to cosmological objects are typically measured in megaparsecs (Mpc).
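The relations above can be sketched in a few lines of Python; the Proxima Centauri parallax is the value quoted in the text, and the function names are illustrative, not standard:

```python
# Minimal sketch of the parallax-distance relations described above.
LY_PER_PARSEC = 3.26  # 1 parsec = 3.26 light years (approximate)

def parallax_to_parsec(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds: d = 1/p."""
    return 1.0 / parallax_arcsec

def parsec_to_light_years(d_pc):
    """Convert a distance in parsecs to light years."""
    return d_pc * LY_PER_PARSEC

d_pc = parallax_to_parsec(0.772)   # Proxima Centauri's parallax
d_ly = parsec_to_light_years(d_pc)
print(f"{d_pc:.2f} pc = {d_ly:.2f} ly")  # ~1.30 pc, ~4.2 ly
```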
Because of the curvature of space the distance as measured by parallax is not the same as the distances measured by brightness, or by diameter.
Magnitudes and all that
It is difficult to discuss the apparent brightness of objects such as stars and galaxies without encountering one of astronomy's major idiosyncrasies – the magnitude scale. Ever since the time of Hipparchus of Nicaea (c. 190 BC – c. 120 BC), the brightnesses of stars have been measured relative to one another on a logarithmic scale. Around 150 BC, Hipparchus had classified the brightest stars he could see as being of ‘magnitude 1’ and the faintest as being of ‘magnitude 6’: a change of 5 magnitudes represented a factor of 100 in brightness. Because the brain perceives increments in brightness logarithmically, each increment of 1 magnitude corresponds to a change in brightness by a factor of 100^(1/5) ≈ 2.512.
The magnitude scale, as we know it today, was introduced by Pogson (1856). Nowadays we might decide to base a logarithmic brightness scale on powers of two, so that each ‘magnitude’ was twice as bright as its predecessor.
Appendix C - Representing Vectors and Tensors
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 682-684
-
- Chapter
Appendix A - SI, CGS and Planck Units
- Bernard J. T. Jones, Rijksuniversiteit Groningen, The Netherlands
-
- Book:
- Precision Cosmology
- Published online:
- 04 May 2017
- Print publication:
- 20 April 2017, pp 673-678
-
- Chapter
-
Summary
Systems of units are always a problem, despite international agreement to standardise on the SI system. The old cgs system and its multitude of mutually inconsistent variants (Gauss, Lorentz–Heaviside, esu, emu, etc.) is still in use. This is particularly an issue when it comes to using the Maxwell equations. In astrophysics we generally see a mixture of cgs units and subject-specific units, like the magnitude scale for brightness, parsecs for distance, and flux units in the older radio astronomy literature. This is what we might call the Astrophysical System of Units and, like many contemporary authors, I have adhered to that in this text. However, certain issues must be clarified, and in particular the conventions used in the Maxwell equations.
This appendix provides translation between Gaussian cgs units and the SI system for several of the quantities used in the text, and notes updates to the values of the fundamental physical constants that have occurred since the decision to fix the speed of light at a given numerical value. The Maxwell equations have a small section to themselves.
SI, MKS and cgs
There has been a slow move in astronomy away from the traditional centimetre-gram-second (cgs) system of units that evolved in the 19th century towards the metre-kilogram-second (MKS) system and the subsequent International System (SI), which is also known as the MKSA system. However, the change has been slow or, as in the case of astronomy, only partial. By the 1950s many fields had adopted their own subject-specific units, chosen largely because they were particularly convenient or simply entrenched in the culture of the discipline. There are perhaps just two reasons for the slowness of this change: teachers today were often themselves taught in cgs or hybrid systems, and they use textbooks with which they are familiar. Moving from cgs to SI units takes people out of their comfort zone.
The full details of the system of physical units are maintained and documented by the National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce. The information presented here is for convenience and is largely abstracted from the NIST web site.
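Translations of the kind this appendix tabulates can be sketched as a small lookup of cgs-to-SI factors; the table below lists a few standard exact factors for illustration only, and the helper name is not drawn from NIST:

```python
# Illustrative cgs -> SI conversion factors (exact by definition of the units).
CGS_TO_SI = {
    "erg":   ("joule",  1.0e-7),   # energy
    "dyne":  ("newton", 1.0e-5),   # force
    "gauss": ("tesla",  1.0e-4),   # magnetic flux density
    "cm":    ("metre",  1.0e-2),   # length
    "g":     ("kg",     1.0e-3),   # mass
}

def to_si(value, cgs_unit):
    """Convert a value in a cgs unit to its SI counterpart."""
    si_unit, factor = CGS_TO_SI[cgs_unit]
    return value * factor, si_unit

print(to_si(1.0e7, "erg"))  # 1e7 erg is 1 joule
```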